
    The cavity approach for Steiner trees packing problems

    The Belief Propagation approximation, or cavity method, has recently been applied to several combinatorial optimization problems in its zero-temperature implementation, the max-sum algorithm. In particular, recent developments to solve the edge-disjoint paths problem and the prize-collecting Steiner tree problem on graphs have shown remarkable results for several classes of graphs and for benchmark instances. Here we propose a generalization of these techniques to two variants of the Steiner trees packing problem, in which multiple "interacting" trees have to be sought within a given graph. Depending on the interaction among trees, we distinguish the vertex-disjoint Steiner trees problem (V-DStP), where trees cannot share nodes, from the edge-disjoint Steiner trees problem (E-DStP), where edges cannot be shared by trees but nodes can belong to multiple trees. Several practical problems of great interest in network design can be mapped onto these two variants, for instance the physical design of Very Large Scale Integration (VLSI) chips. The formalism described here relies on two-component edge variables and allows us to formulate a message-passing algorithm for the V-DStP and two algorithms for the E-DStP, which differ in how their computational time scales with some relevant parameters. We will show that one of the two formalisms used for the edge-disjoint variant allows us to map the max-sum update equations onto a weighted maximum matching problem over suitable bipartite graphs. We developed a heuristic procedure based on the max-sum equations that shows excellent performance on synthetic networks (in particular, outperforming standard multi-step greedy procedures by large margins) and on large VLSI benchmark instances for which the optimal solution is known: the algorithm found the optimum in two cases, and the gap to optimality was never larger than 4%.
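
    One of the edge-disjoint formalisms is reported to reduce the max-sum updates to weighted maximum matchings on bipartite graphs. As a minimal sketch of that kind of subproblem only (not the paper's actual construction), the snippet below solves a maximum-weight bipartite matching with SciPy on an invented weight matrix.

        # Maximum-weight matching on a small bipartite graph (illustrative weights).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical weight matrix: entry (i, j) plays the role of a max-sum
        # message weight for pairing left node i with right node j.
        weights = np.array([
            [3.0, 1.0, 0.5],
            [2.0, 4.0, 1.5],
            [0.0, 2.5, 3.5],
        ])

        rows, cols = linear_sum_assignment(weights, maximize=True)
        print(list(zip(rows, cols)), weights[rows, cols].sum())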

    On the performance of a cavity method based algorithm for the Prize-Collecting Steiner Tree Problem on graphs

    We study the behavior of an algorithm derived from the cavity method for the Prize-Collecting Steiner Tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation solved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmark networks and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the DHEA solver, a Branch-and-Cut Linear/Integer Programming approach. The comparison shows that the cavity algorithm outperforms both in most large instances, in running time as well as in quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two post-processing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
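
    The abstract describes the algorithm as a fixed-point equation solved by iteration. The sketch below shows only that generic structure (damped iteration of a message-update map until convergence) with a placeholder linear update; it is not the PCST-specific max-sum equations.

        # Generic damped fixed-point iteration; the update map here is a toy contraction.
        import numpy as np

        def iterate_messages(update, messages, damping=0.5, tol=1e-8, max_iter=10_000):
            """Iterate m <- (1 - damping) * update(m) + damping * m until convergence."""
            for _ in range(max_iter):
                new = (1.0 - damping) * update(messages) + damping * messages
                if np.max(np.abs(new - messages)) < tol:
                    return new
                messages = new
            return messages

        A = np.array([[0.2, 0.1], [0.0, 0.3]])
        b = np.array([1.0, -0.5])
        print(iterate_messages(lambda m: A @ m + b, np.zeros(2)))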

    Loop corrections in spin models through density consistency

    Computing marginal distributions of discrete or semidiscrete Markov random fields (MRFs) is a fundamental, generally intractable problem with a vast number of applications in virtually all fields of science. We present a new family of computational schemes to approximately calculate the marginals of discrete MRFs. This method shares some desirable properties with belief propagation, in particular providing exact marginals on acyclic graphs, but it differs from the latter in that it includes some loop corrections; i.e., it takes into account correlations coming from all cycles in the factor graph. It is also similar to the adaptive Thouless-Anderson-Palmer method, but it differs from the latter in that consistency is imposed not on the first two moments of the distribution but rather on the value of its density on a subset of values. The results on finite-dimensional Ising-like models show a significant improvement with respect to the Bethe-Peierls (tree) approximation in all cases, and with respect to the plaquette cluster variational method approximation in many cases. In particular, for the critical inverse temperature β_c of the homogeneous hypercubic lattice, the expansion of (dβ_c)^{-1} around d = ∞ of the proposed scheme is exact up to order d^{-4}, whereas the two latter approximations are exact only up to order d^{-2}.
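
    The property shared with belief propagation (exact marginals on acyclic graphs) can be checked on a toy model. The sketch below illustrates only that baseline, not the density-consistency scheme itself: it compares BP-style messages with exhaustive enumeration on a 3-spin Ising chain with invented couplings.

        # Exact vs. BP marginal of the middle spin on a 3-spin Ising chain (a tree).
        import itertools
        import numpy as np

        beta, J = 0.7, {(0, 1): 1.0, (1, 2): -0.5}   # toy inverse temperature and couplings

        def weight(s):
            return np.exp(beta * sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

        # Marginal of spin 1 by exhaustive enumeration.
        states = list(itertools.product([-1, 1], repeat=3))
        Z = sum(weight(s) for s in states)
        p1_exact = sum(weight(s) for s in states if s[1] == 1) / Z

        # BP messages from the two leaves to the middle spin, then its belief.
        def leaf_message(Jij, s_to):
            return sum(np.exp(beta * Jij * s_from * s_to) for s_from in (-1, 1))

        belief = {s1: leaf_message(J[(0, 1)], s1) * leaf_message(J[(1, 2)], s1) for s1 in (-1, 1)}
        p1_bp = belief[1] / (belief[1] + belief[-1])
        print(p1_exact, p1_bp)   # identical on an acyclic graph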

    Estimating the size of the solution space of metabolic networks

    In this work we propose a novel algorithmic strategy that allows for an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. The algorithm, based on the well-known Bethe approximation, allows the computation in polynomial time of the volume of a non-full-dimensional convex polytope in high dimensions. The results of our algorithm closely match the predictions of Monte Carlo-based estimations of the flux distributions of the Red Blood Cell metabolic network, but in incomparably shorter time. We also analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network, and finally we test the effect of gene knock-outs on the size of the solution space of the E. coli central metabolism.
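
    As a point of reference for the task addressed here (not the Bethe-approximation algorithm itself), the sketch below estimates flux marginals by Monte Carlo for an invented one-metabolite network with stoichiometric matrix S = [1, -1, -1], i.e. v1 = v2 + v3 with all fluxes in [0, 1]; this is the kind of sampling that becomes impractical in high dimension.

        # Monte Carlo estimate of flux marginals for a toy flux polytope.
        import numpy as np

        rng = np.random.default_rng(0)
        samples = []
        while len(samples) < 50_000:
            v2, v3 = rng.uniform(0.0, 1.0, size=2)   # free coordinates of the polytope
            v1 = v2 + v3                             # imposed by S v = 0
            if v1 <= 1.0:                            # flux upper bound on v1
                samples.append((v1, v2, v3))

        samples = np.array(samples)
        print("mean fluxes:", samples.mean(axis=0))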

    Efficient LDPC Codes over GF(q) for Lossy Data Compression

    In this paper we consider the lossy compression of a binary symmetric source. We present a scheme that provides a low-complexity lossy compressor with near-optimal empirical performance. The proposed scheme is based on b-reduced ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced Belief Propagation algorithm, a variant of Belief Propagation. The computational complexity at the encoder is O(d·n·q·log q), where d is the average degree of the check nodes. For our code ensemble, decoding can be performed iteratively by following the inverse steps of the leaf-removal algorithm. For a sparse parity-check matrix, the number of operations needed is O(n).
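
    For a binary symmetric source, "near-optimal" performance is measured against the rate-distortion function R(D) = 1 - h(D), where h is the binary entropy and D the Hamming distortion. The snippet below evaluates only this reference curve; it does not implement the LDPC scheme.

        # Rate-distortion function of a Bernoulli(1/2) source under Hamming distortion.
        import numpy as np

        def binary_entropy(p):
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        for D in (0.05, 0.1, 0.2, 0.3):
            print(f"D = {D:.2f}  ->  R(D) = {1 - binary_entropy(D):.3f} bits/symbol")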

    Predicting epidemic evolution on contact networks from partial observations

    The massive employment of computational models in network epidemiology calls for the development of improved inference methods for epidemic forecasting. For simple compartmental models, such as the Susceptible-Infected-Recovered (SIR) model, Belief Propagation has been shown to be a reliable and efficient method to identify the origin of an observed epidemic. Here we show that the same method can be applied to predict the future evolution of an epidemic outbreak from partial observations at the early stage of the dynamics. The results obtained using Belief Propagation are compared with Monte Carlo direct sampling for the SIR model on random (regular and power-law) graphs, for different observation methods, and on an example of a real-world contact network. Belief Propagation gives in general a better prediction than direct sampling, although the quality of the prediction depends on the quantity under study (e.g., marginals of individual states, epidemic size, extinction-time distribution) and on the actual number of observed nodes that are infected before the observation time.
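
    The Monte Carlo baseline mentioned in the abstract amounts to repeated forward simulation of the epidemic. The sketch below is a minimal version of that baseline (discrete-time SIR on a small random graph with invented parameters and a fixed seed node), not the Belief Propagation method.

        # Direct-sampling estimate of per-node SIR marginals at a later time.
        import random

        random.seed(1)
        N, p_edge, p_inf, p_rec, T, runs = 30, 0.1, 0.3, 0.2, 10, 5000
        neighbors = {i: [] for i in range(N)}
        for i in range(N):
            for j in range(i + 1, N):
                if random.random() < p_edge:
                    neighbors[i].append(j)
                    neighbors[j].append(i)

        not_susceptible = [0] * N
        for _ in range(runs):
            state = ["S"] * N
            state[0] = "I"                      # observed seed node
            for _ in range(T):
                new_state = state[:]
                for i in range(N):
                    if state[i] == "I":
                        for j in neighbors[i]:
                            if state[j] == "S" and random.random() < p_inf:
                                new_state[j] = "I"
                        if random.random() < p_rec:
                            new_state[i] = "R"
                state = new_state
            for i in range(N):
                not_susceptible[i] += state[i] != "S"

        marginals = [c / runs for c in not_susceptible]   # P(node reached by time T)
        print([round(m, 2) for m in marginals[:10]])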

    Contamination source inference in water distribution networks

    We study the inference of the origin and the pattern of contamination in water distribution networks. We assume a simplified model for the dynamics of the contamination spread inside a water distribution network, and assume that at some random location a sensor detects the presence of contaminants. We transform the source-location problem into an optimization problem by considering discrete times and a binary contaminated/not-contaminated state for the nodes of the network. The resulting problem is solved by Mixed Integer Linear Programming. We test our approach on random networks as well as on the Modena city network.
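
    A hedged sketch of an optimization of this flavour is given below: binary contamination variables over discrete times, deterministic spread along edges, one unknown source, and two sensor readings at the final time. It uses the PuLP modelling library as an assumed tool and an invented 4-node path network; it is not the paper's exact formulation.

        # Toy MILP for single-source contamination inference on a path network.
        import pulp

        nodes, T = range(4), 2
        edges = [(0, 1), (1, 2), (2, 3)]
        nbrs = {i: [] for i in nodes}
        for a, b in edges:
            nbrs[a].append(b)
            nbrs[b].append(a)

        prob = pulp.LpProblem("contamination_source", pulp.LpMinimize)
        x = {(i, t): pulp.LpVariable(f"x_{i}_{t}", cat="Binary") for i in nodes for t in range(T + 1)}
        s = {i: pulp.LpVariable(f"s_{i}", cat="Binary") for i in nodes}

        prob += pulp.lpSum(x[i, T] for i in nodes)       # prefer the smallest final spread
        prob += pulp.lpSum(s[i] for i in nodes) == 1     # exactly one source
        for i in nodes:
            prob += x[i, 0] == s[i]                      # initial state set by the source
            for t in range(T):
                prob += x[i, t + 1] >= x[i, t]           # contamination persists
                for j in nbrs[i]:
                    prob += x[i, t + 1] >= x[j, t]       # spreads along edges
                prob += x[i, t + 1] <= x[i, t] + pulp.lpSum(x[j, t] for j in nbrs[i])

        prob += x[2, T] == 1                             # sensor: node 2 contaminated at time T
        prob += x[3, T] == 0                             # sensor: node 3 still clean at time T

        prob.solve(pulp.PULP_CBC_CMD(msg=0))
        print("inferred source:", [i for i in nodes if pulp.value(s[i]) > 0.5])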

    An analytic approximation of the feasible space of metabolic networks

    Assuming a steady-state condition within a cell, metabolic fluxes satisfy an under-determined linear system of stoichiometric equations. Characterizing the space of fluxes that satisfy such equations, along with given bounds (and possibly additional relevant constraints), is considered of utmost importance for the understanding of cellular metabolism. Extreme values for each individual flux can be computed with Linear Programming (as in Flux Balance Analysis), and their marginal distributions can be approximately computed with Monte Carlo sampling. Here we present an approximate analytic method for the latter task, based on Expectation Propagation equations, that does not involve sampling and can achieve much better predictions than other existing analytic methods. The method is iterative, and its computation time is dominated by one matrix inversion per iteration. With respect to sampling, we show through extensive simulations that it has some advantages, including computation time and the ability to efficiently fix empirically estimated distributions of fluxes.
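
    The Linear Programming step mentioned in the abstract (extreme values of each flux, as in Flux Balance Analysis) can be sketched as below for an invented one-metabolite network; this illustrates only that LP baseline, not the Expectation Propagation scheme.

        # Flux ranges via Linear Programming for a toy stoichiometry v1 = v2 + v3.
        import numpy as np
        from scipy.optimize import linprog

        S = np.array([[1.0, -1.0, -1.0]])     # stoichiometric matrix (steady state: S v = 0)
        bounds = [(0.0, 10.0)] * 3            # lower/upper flux bounds

        for k in range(S.shape[1]):
            c = np.zeros(S.shape[1])
            c[k] = 1.0
            vmin = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds).fun
            vmax = -linprog(-c, A_eq=S, b_eq=np.zeros(1), bounds=bounds).fun
            print(f"flux v{k + 1}: [{vmin:.1f}, {vmax:.1f}]")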